Stochastic Hill Climbing with Learning by Vectors of Normal Distributions (Second Corrected and Enhanced Version)

Author

  • Stephan Rudlof
Abstract

This paper describes a stochastic hill climbing algorithm named SHCLVND for optimizing arbitrary vectorial R^n → R functions. It requires only a few parameters. It uses normal (Gaussian) distributions to represent probabilities, which are used to generate better and better argument vectors. The parameters of the normal distributions are changed by a kind of Hebbian learning. Kvasnicka et al. [KPP95] used the algorithm Stochastic Hill Climbing with Learning (HCwL) to optimize a highly multimodal vectorial function on real numbers. We have tested the proposed algorithm by optimizing the same and a similar function, and we show the results in comparison to HCwL. In contrast to HCwL, the SHCLVND algorithm described here works directly on vectors of real numbers instead of their bit-vector representations, and it uses normal distributions instead of single numbers to represent probabilities.

1 Overview

In Section 2 we give an introduction leading up to the algorithm. We then describe it precisely in Section 3, where a compact notation in pseudo PASCAL code is also given (Section 3.4). After that we give an example: in Section 4 we optimize highly multimodal functions with the proposed algorithm and visualize the progress. Section 5 contains a short summary and some ideas for future work. Finally, in Section 6 we give some hints for practical use of the algorithm.

2 Introduction

This paper describes a hill climbing algorithm for optimizing vectorial functions on real numbers.

2.1 Motivation

Flexible algorithms for optimizing arbitrary vectorial functions are of interest whenever no mathematical solution is known, or only a very difficult one; examples are adjusting the parameters of a (trained) neural net [HKP91, Roj93] to optimize its recall behavior with respect to some relevant property, or tuning an image-processing filter with respect to its resulting image.
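To make the scheme concrete, here is a minimal Python sketch of the kind of loop the abstract describes: candidate argument vectors are sampled from one normal distribution per coordinate, and the distribution parameters are nudged toward the best samples. The function name shclvnd, all parameter names, and the concrete update rule (moving each mean toward the centroid of the best candidates and shrinking the widths by a constant factor) are our own assumptions for illustration, not the paper's exact formulation.

    import numpy as np

    def shclvnd(f, dim, lo, hi, n_samples=100, n_best=10,
                learning_rate=0.1, sigma_decay=0.997, generations=500):
        # One normal distribution per coordinate: means start at the
        # center of the search box, widths initially cover the whole box.
        mean = np.full(dim, (lo + hi) / 2.0)
        sigma = np.full(dim, (hi - lo) / 2.0)
        best_x, best_val = mean.copy(), f(mean)
        for _ in range(generations):
            # Sample candidate argument vectors from the current normals.
            pop = np.clip(np.random.normal(mean, sigma, (n_samples, dim)), lo, hi)
            vals = np.apply_along_axis(f, 1, pop)
            order = np.argsort(vals)          # ascending: minimization
            elite = pop[order[:n_best]]
            # Hebbian-like update (assumed form): pull each mean toward
            # the centroid of the best candidates of this generation.
            mean += learning_rate * (elite.mean(axis=0) - mean)
            sigma *= sigma_decay              # gradually narrow the search
            if vals[order[0]] < best_val:
                best_val = vals[order[0]]
                best_x = pop[order[0]].copy()
        return best_x, best_val

For example, shclvnd(lambda x: np.sum((x - 3.0) ** 2), dim=5, lo=-10.0, hi=10.0) should drive all means toward 3. On highly multimodal functions like those treated in Section 4, the slowly shrinking widths are what let the sampler keep exploring early on instead of committing to the first local optimum.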


Related Articles

Variational Methods for Stochastic Optimization

In the study of graphical models, methods based on the concept of variational free-energy bounds have been widely used for approximating functionals of probability distributions. In this paper, we provide a method based on the same principles that can be applied to problems of stochastic optimization. In particular, this method is based upon the same principles as the generalized EM algorithm. W...


Hessian Stochastic Ordering in the Family of multivariate Generalized Hyperbolic Distributions and its Applications

In this paper, random vectors following the multivariate generalized hyperbolic (GH) distribution are compared using the Hessian stochastic order. This family includes the classes of symmetric and asymmetric distributions, by which different behaviors of kurtosis in skewed and heavy-tailed data can be captured. By considering some closed convex cones and their duals, we derive some necessary and s...


Fuzzy State Aggregation and Policy Hill Climbing for Stochastic Environments

Reinforcement learning is one of the more attractive machine learning technologies, due to its unsupervised learning structure and ability to continually learn even as the operating environment changes. Additionally, applying reinforcement learning to multiple cooperative software agents (a multi-agent system) not only allows each individual ag...


A Reinforcement Learning Approach for Product Delivery by Multiple Vehicles

Real-time delivery of products in the context of stochastic demands and multiple vehicles is a difficult problem, as it requires the joint investigation of the problems in inventory control and vehicle routing. We model this problem in the framework of Average-reward Reinforcement Learning (ARL) and present experimental results on a model-based ARL algorithm called H-Learning with piecewise line...


Random Structure of Error Surfaces: Toward New Stochastic Learning Methods Invited Presentation

Learning in neural networks can be formulated as global optimization of a multimodal error function that is defined over the high-dimensional space of connection weights. This global optimization is both theoretically intractable [26] [35] and difficult in practice. Traditional learning heuristics, e.g., back-propagation [31] or Boltzmann learning [11], are largely based on gradient methods or ...




Publication date: 1997